ニューラルネットワークモデリング
Neural Network Modeling
P2-1-233
ベイジアンネットを用いた運動野と感覚野の働きを統一的に説明する計算論的モデル
Unified computational model of motor cortex and sensory cortex based on Bayesian networks

○一杉裕志1
○Yuuji Ichisugi1
産業技術総合研究所1
National Institute of Advanced Industrial Science and Technology(AIST)1

We describe a computational model of the motor areas of the cerebral cortex. The model combines Bayesian networks, competitive learning and reinforcement learning. We found that decision-making using MPE (Most Probable Explanation) approximates ideal decision-making in this model, which suggests that MPE calculation is a promising model not only of sensory-cortex recognition, already addressed by previous work, but also of motor-cortex decision-making. Since MPE can be calculated approximately with linear time complexity, by the approximate belief revision algorithm we proposed previously, this model can explain why the brain works so efficiently. In the future, we will extend this model to a multiple-hidden-node, multiple-layered model that is closer to the actual brain and that can realize more complex motor-area behavior and high-level decision-making of prefrontal areas. We also aim to show the practical usefulness of efficient MPE decision-making in some nontrivial application. The relation between our model and deep learning, which has been attracting attention in the machine-learning field, is also discussed. Reference: Similarities and differences between cerebral cortex and deep learning. (In Japanese) http://staff.aist.go.jp/y-ichisugi/rapid-memo/brain-deep-learning.html
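As an illustration of the decision rule named in this abstract (not the authors' model), the sketch below computes the MPE by brute force in a hypothetical two-node Bayesian network, where a hidden decision variable S must be assigned its most probable value given observed evidence O; all probabilities are invented for the example.

```python
# Brute-force MPE (Most Probable Explanation) in a toy Bayesian network
# S -> O.  S is a hidden state, O is observed evidence; the MPE is the
# assignment of S maximizing the joint probability P(S, O = o).
P_S = {0: 0.6, 1: 0.4}                       # prior over hidden state S
P_O_GIVEN_S = {0: {0: 0.9, 1: 0.1},          # conditional table P(O | S)
               1: {0: 0.2, 1: 0.8}}

def mpe(observed_o):
    """Return (best S, joint probability) given evidence O = observed_o."""
    best_s, best_p = None, -1.0
    for s in (0, 1):
        p = P_S[s] * P_O_GIVEN_S[s][observed_o]
        if p > best_p:
            best_s, best_p = s, p
    return best_s, best_p

s, p = mpe(1)   # evidence O = 1 -> S = 1 is the most probable explanation
```

In a real belief-revision setting the maximization runs over all hidden variables jointly; the abstract's point is that an approximate algorithm can do this in linear time, whereas the exhaustive search above is exponential in the number of hidden nodes.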
P2-1-234
モジュール構造を用いた冗長制御系の学習モデル
A Learning Model for Redundant Control Using Module Structure

○福村直博1, 山尾拓也1
○Naohiro Fukumura1, Takuya Yamao1
豊橋技術科学大学 情報・知能工学系1
Dept Computer Science, Toyohashi Univ of Tech, Toyohashi1

Humans execute various motions using many joints and muscles. To realize these motions, the problem of redundant degrees of freedom (DOFs) must be solved. The freezing hypothesis states that the neural control system eliminates redundant DOFs by fixing specific joints. On the other hand, humans can learn in various novel environments. The human brain is thought to have multiple control modules and to realize various tasks by switching among these modules depending on the situation. Based on these hypotheses, we propose a model that controls DOFs using a module structure. The model has multiple modules corresponding to different DOFs. To verify the model, we performed a computer simulation of controlling an inverted pendulum attached to the end of a two-link manipulator as a redundant control problem. In this simulation, the model has two modules: a lower-DOF module that controls only the elbow joint and a higher-DOF module that controls the elbow and shoulder joints. In the first stage, the shoulder joint is fixed at a specified angle by a PID controller, demonstrating freezing, and the lower-DOF module quickly learns a simple control rule for the elbow joint by the Actor-Critic learning method. While the lower-DOF module learns, the elbow-joint output of the higher-DOF module is trained by supervised learning using the output of the lower-DOF module. After the lower-DOF module has learned, the gain of the shoulder PID controller is decreased, like a release of freezing, and the higher-DOF module starts to learn more dexterous control rules by the Actor-Critic learning method. The control module is selected based on the TD error, which represents the validity of each module's controller. In the simulation, modules switched smoothly with little decrease in control performance, and the proposed model learned more effectively than a higher-DOF module learning alone from the beginning.
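The module-selection rule described above can be sketched in a few lines; this is a hypothetical minimal form (the values and the smaller-|TD error| criterion are my illustration, not the authors' controller): each module produces a TD error, and the module whose prediction best matches the observed reward, i.e., whose |TD error| is smallest, is selected.

```python
# Hypothetical sketch of TD-error-based module selection.
def td_error(reward, v_next, v_now, gamma=0.9):
    """One-step temporal-difference error: r + gamma*V(s') - V(s)."""
    return reward + gamma * v_next - v_now

def select_module(errors):
    """Pick the module whose |TD error| is smallest (most valid predictor)."""
    return min(range(len(errors)), key=lambda i: abs(errors[i]))

# Illustrative values: the high-DOF module predicts the return better.
e_low = td_error(reward=1.0, v_next=0.0, v_now=0.2)    # large error
e_high = td_error(reward=1.0, v_next=0.9, v_now=1.7)   # small error
chosen = select_module([e_low, e_high])                # index 1: high-DOF
```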
P2-1-235
レプリカ交換モンテカルロ法とNESTシミュレータによるスパイキングニューラルネットワークの確率論的パラメータ推定法
A stochastic parameter estimation method for spiking neural networks with the replica exchange Monte Carlo scheme coupled with the NEST simulator

○大塚誠1, 吉本潤一郎1, 銅谷賢治1
○Makoto Otsuka1, Junichiro Yoshimoto1, Kenji Doya1
沖縄科学技術大・神経計算1
NC Unit, OIST, Okinawa1

With the increasing computational capability of hardware, large-scale simulations of biologically realistic neural networks have become feasible. To make them reliable in reproducing experimental data at the network level as well as the single-neuron level, parameter tuning is inevitable. However, theoretically sound methodology is still immature. In this report, we propose a parameter estimation paradigm that copes with the difficulties arising from the huge dimensionality of the simulation models and the uncertainty of neuronal behaviors. In the proposed method, we assume that a desired behavior of the simulation model is given as a probability distribution of the statistics of spikes emitted by the neurons. A typical example is an empirical distribution of the mean firing rates of individual neurons recorded in experiments. The model parameters are estimated so as to minimize the Kullback-Leibler divergence (KLD) between this desired distribution and an empirical distribution obtained by model simulations. Since the fitness of the parameters is evaluated not at a single point but over the space of all possible realizations, over-fitting is expected to be avoided. To implement the method, two critical issues have to be solved: the multimodality of the KLD and the huge computational cost of model simulations. In our implementation, these issues are solved by introducing the replica exchange Monte Carlo method and highly parallelized simulations with NEST (NEural Simulation Tool), respectively. To demonstrate the efficiency and feasibility of the method, parameters of a small-scale network model are estimated from generated spike data, and those of a large-scale network model from experimentally obtained spike data.
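The two quantities central to this method can be illustrated with small self-contained functions (my own constructions over toy discretized firing-rate distributions, not the authors' code): the KLD objective between a desired and a simulated distribution, and the standard replica-exchange (parallel tempering) swap acceptance probability.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as {bin: prob} dicts.
    Assumes q[b] > 0 wherever p[b] > 0."""
    return sum(pb * math.log(pb / q[b]) for b, pb in p.items() if pb > 0)

def swap_accept_prob(e1, e2, t1, t2):
    """Replica-exchange swap acceptance: min(1, exp((1/t1 - 1/t2)(e1 - e2)))."""
    return min(1.0, math.exp((1.0 / t1 - 1.0 / t2) * (e1 - e2)))

# Desired vs. simulated distributions over firing-rate bins (Hz).
desired = {"0-5": 0.5, "5-10": 0.3, "10-20": 0.2}
simulated = {"0-5": 0.4, "5-10": 0.4, "10-20": 0.2}
d = kl_divergence(desired, simulated)   # > 0; zero iff distributions match
```

In the full method, the "energy" fed to the swap rule would be the KLD evaluated by a (parallelized) network simulation at each replica's parameter vector; swapping hot and cold replicas lets the search escape local minima of the multimodal KLD landscape.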
P2-1-236
異方的抑制を持つCA3リカレントネットワークに生成される指向性シータ進行波
Directional traveling theta wave organized in CA3 recurrent network with anisotropic inhibition

○佐村俊和1, 酒井裕1, 林初男2, 相原威1
○Toshikazu Samura1, Yutaka Sakai1, Hatsuo Hayashi2, Takeshi Aihara1
玉川大学 脳科学研究所1, 九州工業大学2
Tamagawa University Brain Science Institute, Tokyo, Japan1, Kyushu Institute of Technology, Kitakyushu, Japan2

In the hippocampus, the theta wave travels along the longitudinal axis of the hippocampus. It has been suggested that one possible mechanism of the theta wave is directional spike propagation in hippocampal CA3. Hippocampal CA3 is considered a recurrent neural network because its neurons are connected to each other. Yoshida and Hayashi have demonstrated that a CA3 recurrent network model composed of biophysical neurons produces radial spike propagation when connections between pyramidal neurons are updated through spike-timing-dependent plasticity. The spike wave is self-organized in the network and spontaneously propagates repeatedly at a theta frequency. Furthermore, theta waves can be organized anywhere depending on the inputs. However, the organized theta wave has no directionality. On the other hand, we have demonstrated that a recurrent network composed of spiking neurons with anisotropic inhibition produces directional spike propagation. The anisotropy of inhibition comes from the anisotropy of axon projections from inhibitory interneurons. Although directional waves are organized and evoked by external inputs in this network, the organized wave is not rhythmic. In this study, we introduced the anisotropy of inhibition into the CA3 model of Yoshida and Hayashi. We show that directional traveling theta waves are organized and propagate repeatedly at a theta frequency in the network. The theta wave propagated along the direction in which interneurons have long axons. These results suggest that anisotropic inhibition in hippocampal CA3 organizes directional traveling theta waves spontaneously. Indeed, O-LM interneurons in hippocampal CA3 have anisotropic axon projections. Furthermore, in this network, the directional traveling waves are reorganized by external inputs. Therefore, the directional traveling theta wave in hippocampal CA3 may code information coming from the dentate gyrus and the entorhinal cortex.
P2-1-237
神経回路モデル上でのフレーム生成を支える分布等価群(DEGs)
Distribution equivalence groups for supporting frame generation in neural network models

○山川宏1
○Hiroshi Yamakawa1
富士通研究所1
Software System Laboratories, FUJITSU LABORATORIES LTD., Kanagawa1

Current narrow AI technologies cannot adapt to tasks in various domains, so the author wants to construct an artificial general intelligence (AGI) system that overcomes this limitation. I believe that an AGI system must generate various domain-specific frames autonomously. Frames are knowledge representations consisting of sets of variables. In the frame-generation procedure, a significant subprocedure, frame-candidate generation by variable assimilation, has not yet been realized because of the huge hypothesis space. Representations that can express various relationships among the variables in the system could assist in developing this subprocedure, but no such representations have heretofore been known. Through close collaboration with neuroscientists, the author searched for clues to such representations in the neuroscience field. The author then examined neuroscientific research results and concluded the following: (A) the hippocampal formation (HCF) is in charge of frame generation, and (B) distribution equivalence groups (DEGs) are the representations used by the HCF for expressing variable relationships. Here, I define a distribution equivalence group (DEG) as consisting of about ten cases set in a multidimensional subspace, in consideration of variable-exchange symmetry. This symmetry means that if two distributions are identical when variables within the subspace are exchanged, then they belong to the same DEG. I have already estimated, using a binary-variable assumption, that DEGs exhibit sufficient diversity to represent relations among variables. In this paper, DEGs are extracted from multidimensional sequential data and their eligibility as relationship representations is evaluated, because I aim to construct a neural network model that can generate frames dynamically.
P2-1-238
シナプス抑圧が連想記憶モデルの偽記憶状態に及ぼす影響
The effect of synaptic depression on spurious state in associative memory model

○村田伸1, 大坪洋介1,2, 永田賢二1, 岡田真人1,3
○Shin Murata1, Yosuke Otsubo1,2, Kenji Nagata1, Masato Okada1,3
東京大学新領域1, 学振2, 理研BSI3
Univ of Tokyo, Tokyo1, JSPS Research Fellow2, RIKEN BSI, Saitama3

An associative memory model is a typical neural network model that has discrete fixed-point attractors as stored memory patterns. These patterns are usually embedded in the synaptic weights by the Hebbian rule. Given an initial state near a memory pattern, the network converges to that memory pattern. However, given an initial state far from the memory patterns, the network may converge to equilibrium states completely different from all the memory patterns. The former are called memory states and the latter spurious states. It is difficult to distinguish memory states from spurious states, since both are equilibrium states of the associative memory model. While the synaptic weights between neurons are constant in time in this model, synaptic efficacy is known to change dynamically on a short time scale. Neurophysiological experiments show that high-frequency presynaptic inputs decrease synaptic efficacy between neurons. This phenomenon is called synaptic depression, a form of short-term synaptic plasticity. Recent studies showed that synaptic depression destabilizes the embedded memory patterns and then induces transitions among them. These studies suggest that synaptic depression could affect memory states and spurious states differently, so that it could enable us to distinguish them. In this study, we investigate the dynamics of the associative memory model with synaptic depression by Monte Carlo simulation.
As a result, synaptic depression does not affect the memory states but induces an oscillation in the spurious states. Therefore, it becomes possible to distinguish the two kinds of states by investigating the dynamics of the network with synaptic depression. By applying principal component analysis to the neuron data of oscillating spurious states, we also find that the network oscillates on a circle in a two-dimensional plane.
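The basic setting of this abstract, Hebbian storage of binary patterns and attractor recall, can be sketched as below. The discrete-time resource variable `x` standing in for synaptic depression is my simplified addition (each firing neuron's outgoing efficacy is scaled down), not the authors' dynamics.

```python
# Minimal associative memory: Hebbian weights over +1/-1 patterns,
# synchronous recall, and an optional simplified depression factor.
def hebbian_weights(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=5, depression=0.0):
    n = len(state)
    x = [1.0] * n            # synaptic resources (1 = fully recovered)
    s = list(state)
    for _ in range(steps):
        h = [sum(w[i][j] * x[j] * s[j] for j in range(n)) for i in range(n)]
        s = [1 if hi >= 0 else -1 for hi in h]
        # firing (+1) neurons deplete their outgoing efficacy (hypothetical)
        x = [xj * (1 - depression) if sj == 1 else xj for xj, sj in zip(x, s)]
    return s

stored = [[1, 1, -1, -1, 1, -1], [-1, 1, 1, -1, -1, 1]]
w = hebbian_weights(stored)
noisy = [1, 1, -1, -1, -1, -1]   # stored[0] with one bit flipped
out = recall(w, noisy)           # converges back to stored[0]
```

With `depression=0` the stored patterns are fixed points; raising it weakens frequently used synapses over time, the mechanism that, in the authors' full model, leaves memory states intact but sets spurious states oscillating.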
P2-1-239
MT細胞の新しい計算論-MT細胞は本当に速度選択性があるのか?-
A novel computational theory of MT neurons : Do MT neurons actually prefer their 'preferred speeds'?

○中村大樹1, 佐藤俊治1
○Daiki Nakamura1, Shunji Satoh1
電通大院・情報システム・情報メディアシステム1
Grad. School of IS, Univ of Electro-Communications, Tokyo, Japan1

Middle temporal (MT) neurons are believed to have a preferred speed. An MT neuron reaches its maximum firing rate when the image speed equals its preferred speed: the response curves of MT neurons are unimodal functions of image speed. However, various image properties such as contrast and texture affect the preferred speed. A simple question confronts us: do MT neurons actually prefer their 'preferred speeds'? If so, a preferred speed should not be affected by image properties.
To explain why such neurons show their maximum firing rate at particular speeds, we make a counter-proposal: a new computational theory of MT neurons. Our theory is based on the assumption that MT neurons employ the Lucas-Kanade method, an engineering method for optical-flow computation developed independently of brain science. Under this assumption, the output of our MT neuron model is proportional to the speed estimated by the Lucas-Kanade method, which has no 'preferred speed'.
We reproduced the unimodal response curves by numerical simulation of our model. This means that MT models require no parameter related to a preferred speed. The speed at which the output is maximal is merely the upper limit of correct estimation; we therefore named this upper limit the 'critical speed'. When the image speed is below the critical speed, MT neurons respond linearly to the true speed. However, when the image speed exceeds the critical speed, MT neurons fail to estimate the correct speed, and such over-speeds decrease their firing rate. The linearly increasing response below the 'critical speed' and the decreasing response above it together produce the unimodal response curve. Furthermore, different 'preferred speeds' correspond to different 'critical speeds' under multi-resolution image processing: for example, the 'critical speed' under a high-resolution condition is lower than that under a low-resolution condition.
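The Lucas-Kanade estimate underlying this theory reduces, in one spatial dimension, to a closed form; the sketch below is my own minimal version (not the authors' model), estimating the speed of a translating sinusoid from spatial and temporal finite differences.

```python
import math

def lk_speed(frame0, frame1):
    """1-D Lucas-Kanade speed estimate: v = -sum(Ix*It) / sum(Ix*Ix),
    using central spatial differences and a one-frame temporal difference."""
    n = len(frame0)
    num = den = 0.0
    for i in range(1, n - 1):
        ix = (frame0[i + 1] - frame0[i - 1]) / 2.0   # spatial gradient
        it = frame1[i] - frame0[i]                   # temporal gradient
        num += ix * it
        den += ix * ix
    return -num / den

# A sinusoid translating by 0.5 pixels per frame (period 32 pixels).
xs = [math.sin(2 * math.pi * x / 32.0) for x in range(64)]
shifted = [math.sin(2 * math.pi * (x - 0.5) / 32.0) for x in range(64)]
v = lk_speed(xs, shifted)   # close to the true speed of 0.5 px/frame
```

The estimate is accurate only while the displacement stays small relative to the image structure; past that point the finite-difference gradients misrepresent the motion, which is exactly the 'critical speed' behavior the abstract attributes to MT responses.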
P2-1-240
閾値関数の非線形性が及ぼす高次発火相関構造への影響について ― 一次視覚野をモデルとして
The effect of threshold non-linearity on the structured higher-order correlations

○五十嵐康彦1, 岡田真人1,2
○Yasuhiko Igarashi1, Masato Okada1,2
東京大学大学院 新領域創成科学研究科 複雑理工学専攻1, 理化学研究所BSI2
Graduate School of Frontier Sciences, The University of Tokyo, Japan1, Brain Science Institute, RIKEN, Japan2

The detailed nature of the code of neural populations is determined by the connectivity among cells. To understand the complex relationship between the structure of a neural network and the population code, several experimental studies have recently reported that higher-order correlated patterns of activity are often observed in sequences of action potentials of neural populations in the brain [Ohiorhenuan et al. 2010, 2011; Ganmor et al. 2011]. However, very little is known theoretically about the relationship between the structural connections linking sets of neurons and the effect of higher-order correlations on information processing. We constructed a theory of the origin of structured higher-order neural activities in a network model that can elucidate these experimental observations. We particularly focus on the comparison between our theoretical results and the electrophysiological experiments of Ohiorhenuan et al. in the primary visual cortex (V1) [Ohiorhenuan et al. 2010, 2011]. Unlike a homogeneous network [Amari et al. 2003; Macke et al. 2011], a network with columnar structure can provide not only the tuning curve of firing rates but also the relationship between higher-order correlations. We also found that the heterogeneous structure can dynamically control the structure of higher-order correlations and generate both sparse and synchronized neural activity. We expect our study to promote theoretical studies of how structured interactions affect higher-order correlated neural activity and information processing in the brain.
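A concrete way to see a higher-order correlation of the kind measured in these experiments (my simplified construction; the cited studies use maximum-entropy models rather than this raw statistic) is to compare the observed triplet firing rate of three binary neurons with the prediction from independent firing:

```python
# Excess triplet correlation: <s1 s2 s3> minus the independent
# prediction <s1><s2><s3>, from binarized spike trains.
def mean(xs):
    return sum(xs) / len(xs)

def triplet_excess(spikes):
    """spikes: list of (s1, s2, s3) binary tuples, one per time bin."""
    m1 = mean([s[0] for s in spikes])
    m2 = mean([s[1] for s in spikes])
    m3 = mean([s[2] for s in spikes])
    m123 = mean([s[0] * s[1] * s[2] for s in spikes])
    return m123 - m1 * m2 * m3

# Perfectly synchronized triplet firing yields a large positive excess.
data = [(1, 1, 1)] * 3 + [(0, 0, 0)] * 7
excess = triplet_excess(data)   # 0.3 - 0.3**3
```

Whether such an excess reflects genuine third-order interaction or merely pairwise structure is exactly what maximum-entropy analyses disentangle; the raw statistic above is only the starting observation.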
P2-1-241
神経回路モデルでの状態遷移確率の学習
Learning of state transition probability: a neural network model

○齋藤大1, 瀧山健1, 岡田真人1,2
○Hiroshi Saito1, Ken Takiyama1, Masato Okada1,2
東大 新領域 複雑理工1, 理研BSI2
Department of Complexity Science and Engineering, Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa, Chiba, Japan1, RIKEN Brain Science Institute, Wako, Saitama, Japan2

Humans and animals can predict future states of their environment based on acquired knowledge about it; e.g., in the rainy season we predict that it will continue to rain, and in the dry season we predict that it will continue to be sunny. In these examples, we predict the state (weather) based on the learned dynamics of the environment (the transition probabilities of the weather). In fact, a recent brain imaging study suggests that transition probabilities are encoded in the brain [Glascher et al., 2010].

However, how the brain learns transition probabilities, i.e., the dynamics of the environment, is still controversial. We propose a Hebbian algorithm that enables a feedforward neural network to learn transition probabilities. Based on analytical and numerical calculations, we confirmed that our model learns the transition probabilities regardless of whether the states are completely or incompletely observable.

We further investigated the explanatory power of our model. To make the model more realistic, we additionally defined an eligibility trace, a low-pass filter likely implemented in the brain. The eligibility trace maintains past states and modifies the learned transition probabilities so as to predict future states accurately and to adapt rapidly to drastic changes of the environment, i.e., changes of the transition probabilities. Furthermore, when a random-dot motion discrimination task was simulated, the neural activities of our model resembled those of the monkey lateral intraparietal area. It should be noted that, in contrast to previous studies, we do not assume that each neuron encodes the log probability of a state.

In summary, we propose a Hebbian learning algorithm that enables a neural network model to learn transition probabilities. Not only can our algorithm accurately estimate the transition probabilities, but our network model can also explain actual neural activities in a decision-making task.
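A Hebbian-style rule that converges to transition probabilities can be sketched as follows. This running-average form, in which the presynaptic "current state" unit gates a delta-rule update onto "next state" units, is a hypothetical stand-in for the authors' algorithm, shown only to make the learning target concrete.

```python
# Weight w[i][j] converges to the empirical transition probability
# P(next = j | current = i) of the observed state sequence.
def learn_transitions(sequence, n_states, lr=0.05, epochs=200):
    w = [[1.0 / n_states] * n_states for _ in range(n_states)]
    for _ in range(epochs):
        for cur, nxt in zip(sequence, sequence[1:]):
            for j in range(n_states):
                target = 1.0 if j == nxt else 0.0
                # only the row of the active (presynaptic) state is updated
                w[cur][j] += lr * (target - w[cur][j])
    return w

# Deterministic cycle 0 -> 1 -> 2 -> 0 -> ...
seq = [0, 1, 2] * 30
w = learn_transitions(seq, 3)
# w[0][1], w[1][2] and w[2][0] all approach 1; each row stays normalized.
```

The update preserves each row's sum at 1, so the learned weights remain a proper conditional distribution throughout learning, one reason running-average Hebbian rules are a natural fit for this target.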
P2-1-242
層内共通ノイズをもつフィードフォワードネットワークの層間相関
Inter-layer correlation in a feed-forward network induced by intra-layer common noise

○唐木田亮1, 五十嵐康彦1, 永田賢二1, 岡田真人1,2
○Ryo Karakida1, Yasuhiko Igarashi1, Kenji Nagata1, Masato Okada1,2
東大院・新領域・複雑理工1, 理研脳総研2
Grad Sch of Front Sci, Univ of Tokyo, Kashiwa, Japan1, RIKEN Brain Sci Inst, Wako, Japan2

Physiological experiments have recently suggested that correlated neural activities are produced by common noise in a neural population [Yu et al. 2010, Hansen et al. 2012]. Hansen et al. demonstrated that common noise occurs in each cortical layer and increases spike correlation in a multi-layer network. The common noise within a layer, which we call intra-layer common noise, induces correlation within the layer, i.e., intra-layer correlation. They also indicated the appearance of correlation between different layers, i.e., inter-layer correlation, in the presence of the common noise. As for the intra-layer correlation, theoretical studies have already revealed that common noise facilitates synchronous firing and thereby induces intra-layer correlation in a feed-forward network [Amari et al. 2003, Macke et al. 2011]. However, previous theoretical studies have not evaluated the effects of intra-layer common noise on the statistical properties of the inter-layer correlation. In this study, we construct a homogeneous multi-layer feed-forward network so as to evaluate the inter-layer correlation induced by intra-layer common noise. We analytically derive the joint probability distribution of firing rates and then calculate theoretical values of the inter-layer correlation. The theoretical results reveal that intra-layer common noise generates not only intra-layer correlation but also inter-layer correlation, in agreement with simulation results. We estimate the dependence of the inter-layer correlation on the number of neurons N and on the model parameters. Moreover, we compare the strength of the inter-layer correlation with that of the intra-layer correlation. Our results should provide theoretical insight into experimental results on spike correlation in multi-layer networks.
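The mechanism can be demonstrated with a toy two-layer simulation (my own construction with Gaussian rate units, not the authors' analysis): a noise term shared within layer 1 survives the feed-forward averaging and therefore correlates layer-1 activity with downstream layer-2 activity.

```python
import random

# Pearson correlation between two equal-length samples.
def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

random.seed(0)
T, N = 2000, 20
l1_unit, l2_unit = [], []
for _ in range(T):
    common = random.gauss(0, 1)                      # intra-layer common noise
    layer1 = [common + random.gauss(0, 1) for _ in range(N)]
    layer2 = sum(layer1) / N + random.gauss(0, 0.1)  # one downstream neuron
    l1_unit.append(layer1[0])
    l2_unit.append(layer2)

r = correlation(l1_unit, l2_unit)   # clearly positive inter-layer correlation
```

Without the shared `common` term the private noise averages out across the N layer-1 units and the inter-layer correlation shrinks toward the O(1/N) level, which is the N-dependence the abstract analyzes.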
P2-1-243
スパイキングニューラルネットワークを用いたトポロジーに応じた複数感覚統合
Structural connectivity influences how multisensory inputs are integrated in spiking neural networks

○藤井敬子1, 山田康智2,3, 國吉康夫2
○Keiko Fujii1, Yasunori Yamada2,3, Yasuo Kuniyoshi2
東京大院・学府1, 東京大院・情理2, 学振・特別研究員(DC1)3
Grad. School of Interdisciplinary Information Studies, The Univ. of Tokyo, Japan1, Grad. School of Info. Sci. & Tech., The Univ. of Tokyo, Japan2, JSPS research fellow3

Multisensory integration is a fundamental function of information processing observed in various cortical regions. However, how the structural connectivity of neural networks influences multisensory integration at both the population and single-neuron levels is largely unknown. We examined the influence of structural connectivity on integration as network architectures were gradually changed from random to highly modularized. We ran computer simulations and investigated how multisensory inputs were integrated in networks composed of leaky integrate-and-fire neurons with spike-timing-dependent synaptic plasticity. At the population level, all networks showed activities in response to multisensory inputs that differed significantly from the linear summations of their responses to single-sensory inputs, indicating that all networks can integrate multisensory inputs. At the single-neuron level, in a random network, multimodal neurons, which are significantly sensitive to multisensory inputs, were dominant, while there were few unimodal neurons, which are significantly sensitive to specific single-sensory inputs. However, as the modularity of the initial network structure increased, the number of unimodal neurons gradually increased while the number of multimodal neurons decreased. Further, we found that spatio-temporal firing patterns in random and modular networks also differed. In random networks, multisensory inputs were encoded independently of the original unimodal inputs. As modularity increased, however, multisensory inputs were increasingly encoded so that their unimodal firing patterns were partially preserved. Our results suggest that network structural connectivity guides how multisensory inputs are integrated.
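The leaky integrate-and-fire unit used throughout the networks in this session can be sketched in its generic textbook form (parameters below are illustrative, not taken from any of the abstracts):

```python
# Minimal leaky integrate-and-fire neuron, Euler-integrated:
# dV/dt = (-(V - v_rest) + I(t)) / tau, with threshold-and-reset.
def lif_run(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Return the list of time-step indices at which the neuron spikes."""
    v, spikes = v_rest, []
    for t, i_t in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_th:
            spikes.append(t)
            v = v_reset
    return spikes

spikes = lif_run([1.5] * 200)   # suprathreshold drive -> regular spiking
quiet = lif_run([0.5] * 200)    # subthreshold drive -> no spikes at all
```

With constant drive I, the membrane potential relaxes toward v_rest + I; the neuron fires periodically if and only if that asymptote exceeds v_th, which is why the 0.5 drive above stays silent.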
